use storage records api in e2e #4373
Conversation
Progress at last 🎉. Are there instructions somewhere on how I can try it out?

Added a readme, @tomaskikutis
I see you removed the existing prepopulate thing entirely and probably created a full dump from it. The problem is that the prepopulate script produced a corrupted DB, in the sense that the schema wasn't respected and you would find all kinds of states that are not possible in production. My idea was to continue using prepopulate for old tests, but to make a new full dump for new tests. That can still be done in a way - I would rename the existing full dump to In summary, for this PR it'd be enough if you could rename the current full dumps to
By the way, I'm wondering now, is there a way to modify the full dump instead of creating a new record? I'm thinking that for 90% of the cases we'd want one big snapshot/dump, to be able to speed up and parallelize test runs, and for the edge cases we'd use records where a specific state needs to be simulated that can't co-exist in the main dump.
Not sure what data should be in this full dump; we don't have any initial dataset.

I think that's the use case for records: a record is just a diff from the full dump.
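To make the "record as a diff" idea concrete, here is a minimal sketch of how a record could be applied on top of a full dump. The `Dump` shape, the `applyRecord` name, and the upsert-by-`_id` merge strategy are all hypothetical assumptions for illustration, not the actual API of this PR:

```typescript
// Hypothetical sketch: a dump is a map of collection name -> documents,
// and a "record" is a partial dump merged on top of the base dump.
interface Dump {
    [collection: string]: Array<{_id: string; [key: string]: unknown}>;
}

function applyRecord(base: Dump, record: Dump): Dump {
    const result: Dump = {};
    const collections = new Set([...Object.keys(base), ...Object.keys(record)]);

    for (const collection of collections) {
        const baseDocs = base[collection] ?? [];
        const recordDocs = record[collection] ?? [];

        // Index base documents by _id so record documents can upsert them.
        const byId = new Map(baseDocs.map((doc) => [doc._id, doc]));

        for (const doc of recordDocs) {
            // Shallow-merge: record fields win over base fields.
            byId.set(doc._id, {...(byId.get(doc._id) ?? {}), ...doc});
        }

        result[collection] = [...byId.values()];
    }

    return result;
}
```

A record only carries the documents that differ from the base dump, so a test that needs one extra desk ships a record with just that desk, while everything else comes from the shared snapshot.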
I'd initialize the DB, add mandatory metadata, and then 1 desk, 1 stage, and 1 article; that'd be good for a start. Later we could add things as we go along.
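The minimal dataset suggested above could look roughly like this. The collection names (`desks`, `stages`, `archive`) and field names are assumptions based on common newsroom-CMS conventions, not the actual dump format used here:

```typescript
// Hypothetical minimal base dump: mandatory metadata plus one desk,
// one stage, and one article, as suggested above.
const baseDump = {
    desks: [
        {_id: 'desk1', name: 'Sports'},
    ],
    stages: [
        {_id: 'stage1', desk: 'desk1', name: 'Working Stage'},
    ],
    archive: [
        {
            _id: 'article1',
            headline: 'Test article',
            task: {desk: 'desk1', stage: 'stage1'},
        },
    ],
};
```

Starting this small keeps the snapshot easy to reason about; anything test-specific would go into a record instead of bloating the shared dump.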
I know what records are useful for. What I was asking is whether we can also append stuff to the main dump instead of creating a record?

What's the use case? It still sounds to me like a new record, but I'm probably missing something.
Imagine we write 100 tests per year. Would it mean then that, every time before e2e runs, 100 records need to be applied to the base dump to generate the latest database state? If it's fast I don't mind, but I suspect it might be a bit inefficient.